Improved image inpainting network incorporating supervised attention module and cross-stage feature fusion
Qiaoling HUANG, Bochuan ZHENG, Zicheng DING, Zedong WU
Journal of Computer Applications    2024, 44 (2): 572-579.   DOI: 10.11772/j.issn.1001-9081.2023020123

Image inpainting for irregular missing regions is widely applicable but challenging. To address the artifacts, distorted structures, and blurred textures that existing inpainting methods may produce on high-resolution images, an improved image inpainting network named Gconv_CS (Gated convolution based CSFF and SAM), incorporating a Supervised Attention Module (SAM) and Cross-Stage Feature Fusion (CSFF), was proposed. In Gconv_CS, SAM and CSFF were introduced into Gconv, a two-stage network model based on gated convolution. SAM ensured the effectiveness of the feature information passed to the next stage by using the ground-truth image to supervise the output features of the previous stage. CSFF fused the encoder-decoder features of the previous stage and fed them into the encoder of the next stage, compensating for feature information lost in the previous stage. The experimental results show that, with missing-region proportions of 1% to 10%, compared with the baseline model Gconv on the CelebA-HQ dataset, Gconv_CS improved Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity index (SSIM) by 1.5% and 0.5% respectively, and reduced Fréchet Inception Distance (FID) and L1 loss by 21.8% and 14.8% respectively; on the Places2 dataset, the first two indicators increased by 26.7% and 0.8% respectively, while the latter two decreased by 7.9% and 37.9% respectively. Gconv_CS also achieved a good restoration effect when used to remove masks from a giant panda's face.
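To make the supervised-attention idea concrete, below is a minimal numpy sketch of how a SAM-style block can refine stage-1 features: an intermediate restored image is produced (and would be supervised against the ground truth), then an attention map derived from that restoration re-weights the features handed to the next stage. This is an illustrative sketch only, not the paper's implementation: the 1×1 "convolutions" are per-pixel linear maps, and the weight names (`w_img`, `w_att`, `w_feat`) are hypothetical placeholders initialized randomly.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    # 1x1 convolution expressed as a per-pixel linear map: (H, W, Cin) @ (Cin, Cout)
    return x @ w

def supervised_attention(feats, image, w_img, w_att, w_feat):
    """Sketch of a Supervised Attention Module (SAM)-style block.

    feats : (H, W, C) features output by the previous stage
    image : (H, W, 3) degraded input image
    Returns (refined features for the next stage, intermediate restored image).
    The restored image is where a supervision loss against the ground-truth
    image would be applied during training.
    """
    restored = conv1x1(feats, w_img) + image                      # intermediate restoration
    attention = 1.0 / (1.0 + np.exp(-conv1x1(restored, w_att)))   # sigmoid gate from restoration
    refined = feats + conv1x1(feats, w_feat) * attention          # re-weight useful features
    return refined, restored

# Toy shapes; real inpainting networks operate on much larger feature maps.
H, W, C = 8, 8, 16
feats = rng.standard_normal((H, W, C))
image = rng.standard_normal((H, W, 3))
w_img = rng.standard_normal((C, 3)) * 0.1
w_att = rng.standard_normal((3, C)) * 0.1
w_feat = rng.standard_normal((C, C)) * 0.1

refined, restored = supervised_attention(feats, image, w_img, w_att, w_feat)
print(refined.shape, restored.shape)
```

The design point the abstract makes is visible here: because `restored` is directly supervised, the attention map it induces can suppress uninformative feature channels before they reach the next stage's encoder.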
